\( \newcommand{\water}{{\rm H_{2}O}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\E}{\mathbb{E}} \newcommand{\d}{\mathop{}\!\mathrm{d}} \newcommand{\grad}{\nabla} \newcommand{\T}{^\text{T}} \newcommand{\mathbbone}{\unicode{x1D7D9}} \renewcommand{\:}{\enspace} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator{\Tr}{Tr} \newcommand{\norm}[1]{\lVert #1\rVert} \newcommand{\KL}[2]{ \text{KL}\left(\left.\rule{0pt}{10pt} #1 \; \right\| \; #2 \right) } \newcommand{\slashfrac}[2]{\left.#1\middle/#2\right.} \)

Irreducible Markov chain: The whole state space is a single recurrence class

An irreducible Markov chain is a chain whose whole state space forms a single recurrence class, meaning that for every pair of states \(\; i \;\) and \(\; j \;\) there is a path from \(\; i \;\) to \(\; j \;\), \(\; i \rightsquigarrow j \;\), and a path back, \(\; j \rightsquigarrow i \;\).
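This reachability condition is easy to check computationally. As a minimal sketch (the function name `is_irreducible` and the example matrices are hypothetical, not from the text), we can read the transition matrix as a directed graph with an edge \(\; i \to j \;\) whenever \(\; P_{ij} > 0 \;\), and test that every state reaches every other state:

```python
def is_irreducible(P):
    """Check whether a transition matrix P (list of lists of probabilities)
    is irreducible, i.e. every state can reach every other state."""
    n = len(P)
    # Edge i -> j exists whenever the one-step transition probability is positive.
    adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]

    def reachable(start):
        # Depth-first search collecting all states reachable from `start`.
        seen = {start}
        stack = [start]
        while stack:
            i = stack.pop()
            for j in adj[i]:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    # Irreducible iff every state reaches all n states
    # (this also gives the return paths j ~> i for every pair).
    return all(len(reachable(i)) == n for i in range(n))

# A cyclic chain on 3 states: every state reaches every other, so irreducible.
P_cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]

# A chain with an absorbing state 0: state 0 never reaches state 1, so reducible.
P_absorb = [[1, 0], [0.5, 0.5]]
```

Checking reachability from every state is exactly the definition above: \(\; i \rightsquigarrow j \;\) for all pairs is equivalent to the graph being strongly connected.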

The name irreducible comes from the following: in a chain with more than one recurrence class, or in a chain where some states are transient, after long enough the chain gets trapped in a single recurrence class and never visits the rest of the states again. We could then just as well "reduce" the Markov chain by removing the states that will never be visited again, and the chain's long-run behavior would be exactly the same. A chain where no states can be removed this way, i.e. one whose whole state space is a single recurrence class, is therefore called irreducible.

Examples of reducible Markov chains

A chain whose state space splits into two recurrence classes behaves like two independent Markov chains: which of the two we effectively run depends on where the initial state lies.

A chain with a single recurrence class plus additional transient states eventually visits only states in the recurrence class, so we may as well remove the transient states.
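The second example, getting trapped in the recurrence class, can be seen directly by simulation. Below is a small sketch with a hypothetical 3-state chain (the matrix `P` and the burn-in length are illustrative choices, not from the text): state 0 is transient, and states 1 and 2 form the single recurrence class, since neither of them ever transitions back to 0.

```python
import random

random.seed(0)

# Hypothetical reducible chain: state 0 is transient,
# states 1 and 2 form the single recurrence class.
P = [
    [0.5, 0.25, 0.25],  # from 0: may stay, or fall into {1, 2}
    [0.0, 0.3,  0.7],   # from 1: never back to 0
    [0.0, 0.6,  0.4],   # from 2: never back to 0
]

def step(state):
    """Sample the next state from the transition distribution P[state]."""
    r = random.random()
    total = 0.0
    for j, p in enumerate(P[state]):
        total += p
        if r < total:
            return j
    return len(P) - 1

state = 0
visits_after_burn_in = set()
for t in range(10_000):
    state = step(state)
    if t >= 100:
        # After a burn-in, the transient state 0 should never appear again,
        # so only the recurrence class {1, 2} is recorded here.
        visits_after_burn_in.add(state)
```

Once the chain leaves state 0 it can never return, so the set of states visited after the burn-in is \(\; \{1, 2\} \;\): removing the transient state 0 would leave the long-run behavior unchanged.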